
    An Approach to Model Resources Rationalisation in Hybrid Clouds through Users Activity Characterisation

    In recent years, strategies such as server consolidation through virtualisation have helped managers of large Information Technology (IT) infrastructures to limit hardware usage where possible, so as to provide reliable services while reducing the Total Cost of Ownership (TCO) of such infrastructures. With the advent of Cloud computing, resource usage rationalisation can also be pursued for user applications, provided it is compatible with the Quality of Service (QoS) that must be guaranteed. In this perspective, modern datacenters are “elastic”, i.e., able to shrink or enlarge their pool of local physical or virtual resources drawn from private/public Clouds; moreover, many large computing environments are integrated into distributed infrastructures such as grids and clouds. In this document we report some advances in the realisation of a utility, named Adaptive Scheduling Controller (ASC), which interacts with the datacenter resource manager to enable an effective and efficient usage of resources, in part by classifying users' jobs. Here we focus both on data mining algorithms that classify user activity and on the mathematical formalisation of the functional used by ASC to find the most suitable configuration for the datacenter's resource manager. The presented case study concerns the SCoPE infrastructure, which has a twofold role: local computing resources provider for the University of Naples Federico II and remote resources provider for both the Italian Grid Infrastructure (IGI) and the European Grid Infrastructure (EGI) Federated Cloud.
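    The abstract does not fix a specific data mining method; as a minimal sketch of the job-classification idea, assuming k-means clustering over hypothetical per-job accounting features (wall time, CPU efficiency, peak memory):

```python
# Hypothetical sketch: clustering finished jobs into activity classes,
# one plausible instance of the "data mining algorithms" the abstract
# mentions (the paper's actual features and method are not given here).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row is one finished job: [wall time (s), CPU efficiency, max RSS (GB)]
jobs = np.array([
    [120,   0.95, 0.5],   # short, CPU-bound
    [86400, 0.90, 8.0],   # long, CPU-bound
    [3600,  0.10, 2.0],   # I/O-dominated
    [90,    0.92, 0.4],
    [72000, 0.88, 7.5],
    [4200,  0.12, 1.8],
])

# Normalise features so wall time does not dominate the distance metric.
X = StandardScaler().fit_transform(jobs)

# Three assumed activity classes; the resource-manager configuration
# (queues, slot limits) would then be tuned per class.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)
```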

    About the granularity portability of block-based Krylov methods in heterogeneous computing environments

    Large-scale problems in engineering and science often require the solution of sparse linear algebra problems, and Krylov subspace iteration methods (KM) have led to a major change in how users deal with them. However, for these solvers to use extreme-scale hardware efficiently, considerable work has been spent redesigning both the KM algorithms and their implementations to address challenges such as extreme concurrency, complex memory hierarchies, costly data movement, and heterogeneous node architectures. These redesign approaches base the KM algorithm on block-based strategies, leading to the Block-KM (BKM) algorithm, which has high granularity (i.e., the ratio of computation time to communication time). This work proposes a novel parallel revisitation of the modules used in BKM, based on the overlapping of communication and computation. The revisitation is evaluated through a model of its granularity and verified on a case study drawn from a classical problem in numerical linear algebra.
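    The paper's actual granularity model is not given in the abstract; a toy version of the idea, in my own notation, is that with granularity G = T_comp/T_comm a module costs T_comp + T_comm without overlap but only max(T_comp, T_comm) with perfect overlap:

```python
# Illustrative granularity model (my notation, not the paper's):
# G = T_comp / T_comm.  Overlapping communication with computation
# replaces a sum with a max, so the gain peaks when G is close to 1.
def time_no_overlap(t_comp: float, t_comm: float) -> float:
    return t_comp + t_comm

def time_overlap(t_comp: float, t_comm: float) -> float:
    return max(t_comp, t_comm)

for g in (0.25, 1.0, 4.0):
    t_comp, t_comm = g, 1.0          # fix T_comm = 1, so T_comp = G
    gain = time_no_overlap(t_comp, t_comm) / time_overlap(t_comp, t_comm)
    print(f"G = {g:4.2f}: overlap speedup = {gain:.2f}x")
```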

    A Scalable Space-Time Domain Decomposition Approach for Solving Large Scale Nonlinear Regularized Inverse Ill Posed Problems in 4D Variational Data Assimilation

    We address the development of innovative algorithms designed to solve the strong-constraint Four Dimensional Variational Data Assimilation (4DVar DA) problem in large scale applications. We present a space-time decomposition approach which employs decomposition of the whole domain, i.e. along both the spatial and temporal directions in the overlapping case, together with the partitioning of both the solution and the operator. Starting from the global functional defined on the entire domain, we derive a set of regularized local functionals on the subdomains, providing an order reduction of both the predictive and the Data Assimilation models. Convergence of the algorithm is proved. Performance, in terms of reduction of time complexity and algorithmic scalability, is discussed for the Shallow Water Equations on the sphere. The number of state variables in the model, the number of observations in an assimilation cycle, as well as numerical parameters such as the discretization steps in time and space, are defined on the basis of the discretization grid used by the data available at the Ocean Synthesis/Reanalysis Directory repository of Hamburg University.
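    For orientation, the abstract does not reproduce the global functional; in the standard (textbook) strong-constraint 4DVar formulation it takes the following form, which the space-time approach then restricts to overlapping subdomains with an added regularisation term enforcing agreement on the overlaps:

```latex
% Standard strong-constraint 4DVar functional (textbook form, not copied
% from the paper): x_0 is the initial state, x_b the background, B and R_k
% the background/observation error covariances, M_{0\to k} the model
% propagator, H_k the observation operator, y_k the observations.
\[
J(x_0) = \tfrac{1}{2}\,(x_0 - x_b)^{\mathsf T} B^{-1} (x_0 - x_b)
       + \tfrac{1}{2}\sum_{k=0}^{N}
         \bigl(H_k M_{0\to k}(x_0) - y_k\bigr)^{\mathsf T}
         R_k^{-1}
         \bigl(H_k M_{0\to k}(x_0) - y_k\bigr)
\]
```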

    Implementing effective data management policies in distributed and grid computing environments

    A common programming model in distributed and grid computing is the client/server paradigm, where the client submits requests to several geographically remote servers to execute already deployed applications on its own data. In this context it is mandatory to avoid unnecessary data transfers, because the data sets can be very large. This work addresses the problem of implementing a data management strategy in the presence of data dependencies among subproblems of the same application. To this purpose, some minor changes have been introduced to the Distributed Storage Infrastructure of the NetSolve distributed computing environment.
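    The NetSolve/DSI API itself is not shown in the abstract; the following is a minimal, hypothetical sketch of the underlying idea only: keep data server-side and pass opaque handles between dependent calls, so intermediate results never travel back to the client.

```python
# Hypothetical handle-based data management sketch (not the NetSolve/DSI
# API): the server stores intermediate results and the client chains
# dependent calls by handle, so large arrays cross the network only twice.
import uuid

class Server:
    def __init__(self):
        self._store = {}                 # handle -> server-side data

    def upload(self, data):
        handle = str(uuid.uuid4())
        self._store[handle] = data
        return handle                    # only the handle goes to the client

    def run(self, task, handle):
        result = task(self._store[handle])
        return self.upload(result)       # dependent task reuses stored data

    def download(self, handle):
        return self._store[handle]

server = Server()
h0 = server.upload(list(range(1_000_000)))          # one transfer in
h1 = server.run(lambda d: [x * 2 for x in d], h0)   # no data movement
h2 = server.run(sum, h1)                            # no data movement
print(server.download(h2))                          # one small transfer out
```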

    Towards a parallel component for imaging in PETSc programming environment: a case study in 3-D echocardiography

    A key issue in designing application codes that can effectively exploit the available high performance computing resources is dealing with the complexity of software management and development. This is even more true for imaging applications. This work is the first piece of a rather wide mosaic aimed at the construction of a PSE (Problem Solving Environment) oriented to imaging applications. We discuss computational efforts towards the development of a distributed software environment enabling the transparent use of high performance computers and storage systems for denoising 3-D Echocardiographic sequences via nonlinear diffusion filtering. More precisely, we describe a component-based approach for the development of an integrated software environment relying on the Portable, Extensible Toolkit for Scientific Computation (PETSc). Our approach uses a distributed memory model in which the details of internode communications are hidden within the PETSc parallel objects, while intranode communications are handled at a higher level. We report some experiences with an in vivo acquired 3-D Echocardiographic sequence obtained by means of a rotational acquisition technique using a Tomtec Imaging system.
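    The paper's PETSc components are not reproduced in the abstract; as a minimal serial sketch of the kind of filter involved, a Perona-Malik-style nonlinear diffusion step on a 2-D slice (the paper works on full 3-D sequences and hides the parallelism inside PETSc objects):

```python
# Minimal serial sketch of a Perona-Malik-style nonlinear diffusion step
# on a 2-D image slice; periodic boundaries via np.roll, for brevity only.
import numpy as np

def diffuse(u: np.ndarray, n_iter: int = 20, kappa: float = 0.1,
            dt: float = 0.2) -> np.ndarray:
    u = u.astype(float).copy()
    for _ in range(n_iter):
        # One-sided differences towards the four neighbours.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        # Edge-stopping diffusivity g(s) = exp(-(s/kappa)^2): smooths
        # homogeneous regions while preserving strong edges.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = np.random.rand(64, 64)
print(diffuse(noisy).shape)
```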